    Measuring Engagement in Robot-Assisted Autism Therapy: A Cross-Cultural Study

    During occupational therapy for children with autism, it is often necessary to elicit and maintain engagement for the children to benefit from the session. Recently, social robots have been used for this; however, existing robots lack the ability to autonomously recognize the children’s level of engagement, which is necessary when choosing an optimal interaction strategy. Progress in automated engagement reading has been impeded in part by a lack of studies on child-robot engagement in autism therapy. While it is well known that there are large individual differences in autism, little is known about how these vary across cultures. To this end, we analyzed the engagement of children (ages 3–13) from two different cultural backgrounds: Asia (Japan, n = 17) and Eastern Europe (Serbia, n = 19). The children participated in a 25-minute therapy session during which we studied the relationship between the children’s behavioral engagement (task-driven) and different facets of affective engagement (valence and arousal). Although our results indicate statistically significant differences in engagement displays between the two groups, it is difficult to make any causal claims about these differences due to the large variation in age and behavioral severity of the children in the study. However, our exploratory analysis reveals important associations between target engagement and perceived levels of valence and arousal, indicating that these can be used as a proxy for the children’s engagement during therapy. We provide suggestions on how this can be leveraged to optimize social robots for autism therapy, while taking into account cultural differences.

    Funding: MEXT Grant-in-Aid for Young Scientists B (grant no. 16763279); Chubu University Grant I (grant no. 27IS04I, Japan); European Union, HORIZON 2020 (grant agreement no. 701236, ENGAGEME); European Commission, Framework Programme for Research and Innovation, Marie Sklodowska-Curie Actions (Individual Fellowship); European Commission, Framework Programme for Research and Innovation, Marie Sklodowska-Curie Actions (grant agreement no. 688835, DE-ENIGMA).
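    A minimal sketch of the kind of proxy check the exploratory analysis above describes, assuming per-segment engagement, valence, and arousal ratings. The data here are synthetic placeholders, not the study's, and the use of Python/SciPy and a Spearman rank correlation is an illustrative assumption.

        # Hypothetical proxy check: do perceived valence/arousal track
        # behavioural engagement? All data below are synthetic placeholders.
        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        engagement = rng.uniform(0.0, 1.0, 100)                 # per-segment engagement ratings
        valence = 0.6 * engagement + rng.normal(0.0, 0.2, 100)  # synthetic correlated signal
        arousal = 0.4 * engagement + rng.normal(0.0, 0.3, 100)  # synthetic correlated signal

        for name, signal in [("valence", valence), ("arousal", arousal)]:
            rho, p = spearmanr(engagement, signal)
            print(f"engagement vs {name}: rho={rho:.2f}, p={p:.3g}")

    A strong, significant rho for either dimension would support using that dimension as a stand-in for engagement when direct engagement annotation is unavailable.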

    Recognising Covid-19 from coughing using ensembles of SVMs and LSTMs with handcrafted and deep audio features

    The DiCOVA 2021 Challenge: an encoder-decoder approach for COVID-19 recognition from coughing audio

    Learning Audio Sequence Representations for Acoustic Event Classification

    Acoustic Event Classification (AEC) has become a significant task for machines to perceive the surrounding auditory scene. However, extracting effective representations that capture the underlying characteristics of acoustic events remains challenging. Previous methods mainly focused on designing audio features in a 'hand-crafted' manner. Interestingly, data-learnt features have recently been reported to perform better, but until now these were only considered at the frame level. In this paper, we propose an unsupervised learning framework to learn a vector representation of an audio sequence for AEC. The framework consists of a Recurrent Neural Network (RNN) encoder and an RNN decoder, which respectively transform the variable-length audio sequence into a fixed-length vector and reconstruct the input sequence from that vector. After training the encoder-decoder, we feed audio sequences to the encoder and take the learnt vectors as the audio sequence representations. Compared with previous methods, the proposed approach not only handles audio streams of arbitrary length, but also learns the salient information of the sequence. Extensive evaluation on a large-scale acoustic event database shows that the learnt audio sequence representations outperform state-of-the-art hand-crafted sequence features for AEC by a large margin.
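    Below is a minimal sketch of the encoder-decoder idea described above: a recurrent encoder compresses a feature sequence into a fixed-length vector, and a recurrent decoder is trained to reconstruct the sequence from that vector. The choice of PyTorch, GRU cells, teacher forcing, and the layer sizes are assumptions for illustration, not the paper's exact configuration.

        # Sequence autoencoder sketch: the final encoder state serves as the
        # fixed-length representation of the whole audio sequence.
        import torch
        import torch.nn as nn

        class SeqAutoencoder(nn.Module):
            def __init__(self, feat_dim=40, hidden_dim=128):
                super().__init__()
                self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
                self.decoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
                self.out = nn.Linear(hidden_dim, feat_dim)

            def forward(self, x):
                # x: (batch, time, feat_dim); h summarises the sequence.
                _, h = self.encoder(x)
                # Teacher-forced decoding: re-read the input shifted by one
                # frame, conditioned on the encoder state, and predict it back.
                shifted = torch.cat([torch.zeros_like(x[:, :1]), x[:, :-1]], dim=1)
                y, _ = self.decoder(shifted, h)
                return self.out(y), h.squeeze(0)  # (reconstruction, representation)

        model = SeqAutoencoder()
        seq = torch.randn(2, 120, 40)              # e.g. 120 frames of 40-d features
        recon, z = model(seq)                      # z: learnt sequence representation
        loss = nn.functional.mse_loss(recon, seq)  # reconstruction objective

    After training, a vector like z would be extracted per recording and fed to an event classifier in place of hand-crafted sequence features.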

    Deep learning post-earnings-announcement drift

    Deep speaker conditioning for speech emotion recognition

    Recent advances in computer audition for diagnosing COVID-19: an overview
